A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
Authors
Björn Johansson, Maben Rabi, and Mikael Johansson
Abstract
We present an algorithm that generalizes the randomized incremental subgradient method with fixed stepsize due to Nedić and Bertsekas [SIAM J. Optim., 12 (2001), pp. 109–138]. Our novel algorithm is particularly suitable for distributed implementation and execution, and possible applications include distributed optimization, e.g., parameter estimation in networks of tiny wireless sensors. The stochastic component in the algorithm is described by a Markov chain, which can be constructed in a distributed fashion using only local information. We provide a detailed convergence analysis of the proposed algorithm and compare it with existing, both deterministic and randomized, incremental subgradient methods.
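To make the iteration concrete, the following is a minimal sketch of a Markov-chain-driven incremental subgradient step with fixed stepsize. The function name, the path-graph network, and the doubly stochastic transition matrix P are all illustrative choices made here, not the authors' implementation.

```python
import numpy as np

def markov_incremental_subgradient(subgrads, P, x0, stepsize, n_iters, seed=0):
    """Sketch: minimize f(x) = sum_i f_i(x), one component per iteration.

    The node i currently holding the iterate takes a fixed-stepsize step
    along a subgradient of its own f_i, then hands the iterate to a
    neighbor drawn from row i of the transition matrix P, so the sequence
    of active components forms a Markov chain.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    i = 0  # node currently holding the iterate
    for _ in range(n_iters):
        x = x - stepsize * subgrads[i](x)      # local subgradient step
        i = rng.choice(len(subgrads), p=P[i])  # Markov-chain handoff
    return x

# Toy run: f_i(x) = |x - a_i|, so f = sum_i f_i is minimized at the median.
a = [0.0, 1.0, 4.0]
subgrads = [lambda x, ai=ai: np.sign(x - ai) for ai in a]
P = np.array([[0.75, 0.25, 0.00],   # path graph: each node hands off only
              [0.25, 0.50, 0.25],   # to itself or to adjacent nodes;
              [0.00, 0.25, 0.75]])  # doubly stochastic, uniform stationary dist.
print(markov_incremental_subgradient(subgrads, P, 10.0, 0.01, 20000))  # ~ 1.0
```

Because P is doubly stochastic, the chain's stationary distribution is uniform, so every component is visited equally often in the long run, and only local neighbor links are needed to build P.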
Similar Papers
Incremental Stochastic Subgradient Algorithms for Convex Optimization
This paper studies the effect of stochastic errors on two constrained incremental subgradient algorithms. The incremental subgradient algorithms are viewed as decentralized network optimization algorithms applied to minimizing a sum of functions, where each component function is known only to a particular agent of a distributed network. First, the standard cyclic incremental subgradient algorit...
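As a rough illustration of the setting studied there, the sketch below runs a cyclic incremental subgradient iteration in which each component subgradient is corrupted by additive noise and the iterate is projected onto a box constraint; the Gaussian noise model, the box, and all names here are assumptions made for the example.

```python
import numpy as np

def noisy_cyclic_incremental(subgrads, x0, stepsize, n_cycles,
                             noise_std=0.1, lo=-10.0, hi=10.0, seed=0):
    """Cyclic incremental subgradient iteration with stochastic errors.

    Each cycle visits the agents' components in a fixed order; every step
    uses a noise-corrupted subgradient and then projects the iterate back
    onto the box constraint [lo, hi]^n.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_cycles):
        for g in subgrads:  # fixed cyclic order over the agents
            noisy_g = g(x) + noise_std * rng.standard_normal(np.shape(x))
            x = np.clip(x - stepsize * noisy_g, lo, hi)  # projected step
    return x
```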
Acceleration Method Combining Broadcast and Incremental Distributed Optimization Algorithms
This paper considers a networked system consisting of an operator, which manages the system, and a finite number of subnetworks with all users, and studies the problem of minimizing the sum of the operator’s and all users’ objective functions over the intersection of the operator’s and all users’ constraint sets. When users in each subnetwork can communicate with each other, they can implement ...
Fixed Point Optimization Algorithms for Distributed Optimization in Networked Systems
This paper considers a networked system with a finite number of users and deals with the problem of minimizing the sum of all users’ objective functions over the intersection of all users’ constraint sets, onto which the projection cannot be easily implemented. The main objective of this paper is to devise distributed optimization algorithms, which enable each user to find the solution of the p...
Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey
We survey incremental methods for minimizing a sum ∑_{i=1}^{m} f_i(x) consisting of a large number of convex component functions f_i. Our methods consist of iterations applied to single components, and have proved very effective in practice. We introduce a unified algorithmic framework for a variety of such methods, some involving gradient and subgradient iterations, which are known, and some involvin...
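To illustrate the kinds of single-component updates such a survey unifies, here is a hedged sketch contrasting an incremental gradient step with an incremental proximal step. The quadratic components f_i(x) = ½(a_iᵀx − b_i)² are an assumption chosen so the proximal step has a closed form, and the function name is made up for the example.

```python
import numpy as np

# For quadratic components f_i(x) = 0.5 * (a_i @ x - b_i)**2:
#   gradient step:  x - t * (a_i @ x - b_i) * a_i
#   proximal step:  argmin_u f_i(u) + ||u - x||**2 / (2*t)
#                 = x - t * (a_i @ x - b_i) / (1 + t * (a_i @ a_i)) * a_i

def incremental_step(x, a_i, b_i, t, mode="gradient"):
    r = a_i @ x - b_i                        # residual of component i
    if mode == "gradient":
        return x - t * r * a_i               # incremental gradient step
    return x - (t * r / (1.0 + t * (a_i @ a_i))) * a_i  # incremental proximal step
```

Note the design difference: the proximal step's effective stepsize t/(1 + t‖a_i‖²) is automatically damped, so it never overshoots the minimizer of the component it touches, regardless of how large t is.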
Clustering in Distributed Incremental Estimation in Wireless Sensor Networks
Energy efficiency, low latency, high estimation accuracy, and fast convergence are important goals in distributed incremental estimation algorithms for sensor networks. One approach that adds flexibility in achieving these goals is clustering. In this paper, the framework of distributed incremental estimation is extended by allowing clustering amongst the nodes. Among the observations made is t...
Journal: SIAM Journal on Optimization
Volume: 20
Issue: –
Pages: –
Published: 2009